deep belief network
Automatic Extraction of Road Networks by using Teacher-Student Adaptive Structural Deep Belief Network and Its Application to Landslide Disaster
Kamada, Shin, Ichimura, Takumi
Abstract--An adaptive structural learning method of Restricted Boltzmann Machine (RBM) and Deep Belief Network (DBN) has been developed as one of the prominent deep learning models. The neuron generation-annihilation algorithm in RBM and the layer generation algorithm in DBN construct an optimal network structure for the given input during learning. In this paper, our model is applied to an automatic road-network recognition method called RoadTracer. A novel RoadTracer method using a Teacher-Student based ensemble learning model of Adaptive DBN is proposed, since road maps contain many complicated features and therefore require a model with high representational power for detection. The experimental results showed that the detection accuracy of the proposed model improved from 40.0% to 89.0% on average across the seven major cities in the test dataset. In addition, we applied our method to the detection of available roads after a landslide caused by a natural disaster, in order to rapidly secure a means of transportation. For fast inference, a small version of the trained model was implemented on a small embedded edge device as lightweight deep learning.
Recently there have been more cases of extreme climate events, including unexpected and unusual weather. These events have received attention in the last few years due to the significant loss of human lives and escalating economic costs, as well as their impacts on landslides and changes in ecosystems. In Japan, the Japan Meteorological Agency (JMA) has issued the "Climate Change Monitoring Report" every year, informing on the latest status of climate change. According to [1], during the Heavy Rain Event of July 2018, Japan experienced unprecedented heavy rainfall. Overall precipitation observed at AMeDAS stations throughout Japan in July 2018 was extremely high in comparison with past heavy rainfall events since 1982.
A prominent characteristic of this rain event is that record-breaking local precipitation, particularly within 48 to 72 hours, was observed extensively over western Japan and the Tokyo region, including the Seto Inland Sea side of the Chugoku and Shikoku regions. In addition, lifelines such as water supply and communications were damaged, and traffic obstacles occurred over a wide area. Due to the disruption of major roads and railroads, supplies were also suspended.
S. Kamada is with Hiroshima City University, Hiroshima, Japan. T. Ichimura is with Prefectural University of Hiroshima, Hiroshima, Japan.
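The abstract does not detail the Teacher-Student ensemble itself, so the following is only a minimal sketch of the generic knowledge-distillation idea such schemes build on: a smaller student is trained to match the temperature-softened outputs of a larger teacher. All names here (`distillation_loss`, the temperature `T`) are illustrative, not taken from the paper.

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax; higher T yields softer distributions.
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between softened teacher and student predictions.
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12))))

# A student that reproduces the teacher's logits has (near-)zero loss.
t = np.array([2.0, 0.5, -1.0])
print(distillation_loss(t, t, T=2.0))  # ~0.0
```

A lightweight student distilled this way is also what makes deployment on a small embedded edge device plausible, as the abstract mentions.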
Top-Down Regularization of Deep Belief Networks
Designing a principled and effective algorithm for learning deep architectures is a challenging problem. The current approach involves two training phases: a fully unsupervised learning followed by a strongly discriminative optimization. We suggest a deep learning strategy that bridges the gap between the two phases, resulting in a three-phase learning procedure. We propose to implement the scheme using a method to regularize deep belief networks with top-down information. The network is constructed from building blocks of restricted Boltzmann machines learned by combining bottom-up and top-down sampled signals. A global optimization procedure that merges samples from a forward bottom-up pass and a top-down pass is used.
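As a rough schematic of combining bottom-up and top-down signals (not the paper's actual training procedure), a hidden layer can be driven jointly by the input below it and a signal from above; all shapes and variable names below are invented for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative shapes: 6 visible units, 4 hidden units, 3 top-down units.
W = rng.normal(scale=0.1, size=(4, 6))   # bottom-up weights
U = rng.normal(scale=0.1, size=(4, 3))   # top-down weights
c = np.zeros(4)                          # hidden biases

x = rng.random(6)             # bottom-up input sample
y = np.array([1.0, 0.0, 0.0]) # top-down signal (e.g. a class indicator)

# Hidden activation driven jointly by both directions.
h = sigmoid(W @ x + U @ y + c)
print(h.shape)  # (4,)
```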
A survey on learning models of spiking neural membrane systems and spiking neural networks
Paul, Prithwineel, Sosik, Petr, Ciencialova, Lucie
Spiking neural networks (SNN) are a biologically inspired model of neural networks with certain brain-like properties. In the past few decades, this model has received increasing attention in the computer science community, owing also to the success of deep learning. In SNN, communication between neurons takes place through spikes and spike trains. This differentiates these models from the ``standard'' artificial neural networks (ANN), where the frequency of spikes is replaced by real-valued signals. Spiking neural P systems (SNPS) can be considered a branch of SNN based more on the principles of formal automata, with many variants developed within the framework of membrane computing theory. In this paper, we first briefly compare the structure and function, advantages and drawbacks of SNN and SNPS. A key part of the article is a survey of recent results and applications of machine learning and deep learning models of both SNN and SNPS formalisms.
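The spike-based communication described above can be illustrated with the classic leaky integrate-and-fire (LIF) neuron, a standard SNN building block; the parameters here are arbitrary toy values:

```python
def lif_spikes(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential leaks toward
    rest, integrates the input current, and emits a spike (then resets)
    whenever it crosses the threshold."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v += dt * (-v / tau + i_t)   # leaky integration
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset              # reset after a spike
        else:
            spikes.append(0)
    return spikes

# A constant drive produces a regular spike train.
train = lif_spikes([0.3] * 10)
print(sum(train))  # 2 spikes over 10 steps
```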
Learning to Align from Scratch
Huang, Gary B., Lee, Honglak
Unsupervised joint alignment of images has been demonstrated to improve performance on recognition tasks such as face verification. Such alignment reduces undesired variability due to factors such as pose, while only requiring weak supervision in the form of poorly aligned examples. However, prior work on unsupervised alignment of complex, real-world images has required the careful selection of feature representation based on hand-crafted image descriptors, in order to achieve an appropriate, smooth optimization landscape. In this paper, we instead propose a novel combination of unsupervised joint alignment with unsupervised feature learning. Specifically, we incorporate deep learning into the congealing alignment framework.
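Congealing itself is straightforward to sketch in one dimension: each sequence is repeatedly re-shifted to whichever offset minimises the entropy of the aligned stack. This toy version uses integer shifts over binary signals, not the image transformations or learned deep features of the paper:

```python
import numpy as np

def stack_entropy(stack):
    # Sum of per-position binary entropies across the stack of sequences.
    p = np.clip(stack.mean(axis=0), 1e-9, 1 - 1e-9)
    return float(np.sum(-p * np.log(p) - (1 - p) * np.log(1 - p)))

def congeal_step(stack, max_shift=2):
    # Greedily re-shift each sequence to whichever offset lowers the
    # entropy of the whole stack (shift 0 keeps it unchanged).
    out = stack.copy()
    for i in range(len(out)):
        candidates = [np.roll(out[i], s) for s in range(-max_shift, max_shift + 1)]
        scores = []
        for cand in candidates:
            trial = out.copy()
            trial[i] = cand
            scores.append(stack_entropy(trial))
        out[i] = candidates[int(np.argmin(scores))]
    return out

# Three copies of the same edge, two of them misaligned by one position.
base = np.array([0, 0, 1, 1, 1, 0, 0, 0], dtype=float)
stack = np.vstack([base, np.roll(base, 1), np.roll(base, -1)])
aligned = congeal_step(stack)
print(stack_entropy(aligned) <= stack_entropy(stack))  # True
```

Because the zero shift is always among the candidates, each step can only lower (or keep) the stack entropy, which is the smooth optimization behaviour the paper seeks from learned features.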
A Better Way to Pretrain Deep Boltzmann Machines
We describe how the pretraining algorithm for Deep Boltzmann Machines (DBMs) is related to the pretraining algorithm for Deep Belief Networks and we show that under certain conditions, the pretraining procedure improves the variational lower bound of a two-hidden-layer DBM. Based on this analysis, we develop a different method of pretraining DBMs that distributes the modelling work more evenly over the hidden layers. Our results on the MNIST and NORB datasets demonstrate that the new pretraining algorithm allows us to learn better generative models.
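The paper's modified pretraining is not reproduced here, but the standard greedy procedure it analyses, training each RBM with one step of contrastive divergence (CD-1) and then stacking, can be sketched as follows (shapes and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cd1_update(W, a, b, v0, lr=0.05):
    """One contrastive-divergence (CD-1) step for a binary RBM with
    weights W, visible bias a, hidden bias b, on a data batch v0."""
    ph0 = sigmoid(v0 @ W + b)                         # hidden probs given data
    h0 = (rng.random(ph0.shape) < ph0).astype(float)  # sample hidden states
    pv1 = sigmoid(h0 @ W.T + a)                       # reconstruct visibles
    ph1 = sigmoid(pv1 @ W + b)                        # hidden probs on reconstruction
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)    # positive minus negative phase
    a += lr * (v0 - pv1).mean(axis=0)
    b += lr * (ph0 - ph1).mean(axis=0)
    return W, a, b

# Greedy layer-wise pretraining: fit the first RBM on the data, then train
# the next RBM on this layer's hidden probabilities, and so on.
v = (rng.random((32, 8)) < 0.5).astype(float)
W = rng.normal(scale=0.01, size=(8, 4))
a, b = np.zeros(8), np.zeros(4)
for _ in range(20):
    W, a, b = cd1_update(W, a, b, v)
```

The pretraining change the abstract describes concerns how the modelling work is split across layers in this stacking step, not the CD update itself.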
Statistical Model Criticism using Kernel Two Sample Tests
We propose an exploratory approach to statistical model criticism using maximum mean discrepancy (MMD) two sample tests. Typical approaches to model criticism require a practitioner to select a statistic by which to measure discrepancies between data and a statistical model. MMD two sample tests are instead constructed as an analytic maximisation over a large space of possible statistics and therefore automatically select the statistic which most shows any discrepancy. We demonstrate on synthetic data that the selected statistic, called the witness function, can be used to identify where a statistical model most misrepresents the data it was trained on. We then apply the procedure to real data where the models being assessed are restricted Boltzmann machines, deep belief networks and Gaussian process regression and demonstrate the ways in which these models fail to capture the properties of the data they are trained on.
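The MMD statistic and its witness function are simple to compute with a Gaussian kernel. The following sketch uses the biased estimator and toy one-dimensional samples; the bandwidth `gamma` is an arbitrary choice, not a tuned value:

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    # Gaussian (RBF) kernel matrix between two sample sets.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def mmd2(x, y, gamma=0.5):
    # Biased estimate of the squared maximum mean discrepancy.
    return rbf(x, x, gamma).mean() + rbf(y, y, gamma).mean() - 2 * rbf(x, y, gamma).mean()

def witness(t, x, y, gamma=0.5):
    # Witness function: large magnitude marks where the samples differ most.
    t = np.atleast_2d(t)
    return rbf(t, x, gamma).mean(axis=1) - rbf(t, y, gamma).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=(200, 1))  # stand-in for the data
y = rng.normal(2.0, 1.0, size=(200, 1))  # stand-in for model samples
print(mmd2(x, y) > 0.3)                       # True: the samples clearly differ
print(witness(np.array([0.0]), x, y)[0] > 0)  # True: data density exceeds the model's near 0
```

Evaluating the witness function over a grid is how one localises where a trained model (an RBM, DBN, or GP in the paper) misrepresents its training data.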
Comprehensive Exploration of Synthetic Data Generation: A Survey
Bauer, André, Trapp, Simon, Stenger, Michael, Leppich, Robert, Kounev, Samuel, Leznik, Mark, Chard, Kyle, Foster, Ian
Recent years have witnessed a surge in the popularity of Machine Learning (ML), applied across diverse domains. However, progress is impeded by the scarcity of training data due to expensive acquisition and privacy legislation. Synthetic data emerges as a solution, but the abundance of released models and limited overview literature pose challenges for decision-making. This work surveys 417 Synthetic Data Generation (SDG) models over the last decade, providing a comprehensive overview of model types, functionality, and improvements. Common attributes are identified, leading to a classification and trend analysis. The findings reveal increased model performance and complexity, with neural network-based approaches prevailing, except for privacy-preserving data generation. Computer vision dominates, with GANs as primary generative models, while diffusion models, transformers, and RNNs compete. Implications from our performance evaluation highlight the scarcity of common metrics and datasets, making comparisons challenging. Additionally, the neglect of training and computational costs in literature necessitates attention in future research. This work serves as a guide for SDG model selection and identifies crucial areas for future exploration.
Sparse Feature Learning for Deep Belief Networks
Unsupervised learning algorithms aim to discover the structure hidden in the data, and to learn representations that are more suitable as input to a supervised machine than the raw input. Many unsupervised methods are based on reconstructing the input from the representation, while constraining the representation to have certain desirable properties (e.g. sparsity). Others are based on approximating density by stochastically reconstructing the input from the representation. We describe a novel and efficient algorithm to learn sparse representations, and compare it theoretically and experimentally with a similar machine trained probabilistically, namely a Restricted Boltzmann Machine. We propose a simple criterion to compare and select different unsupervised machines based on the trade-off between the reconstruction error and the information content of the representation.
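The paper's own sparse-coding machine is not reproduced here; as a generic illustration of the reconstruction-plus-sparsity trade-off, the following sketch infers a sparse code for a signal by ISTA, i.e. gradient steps on the reconstruction error followed by soft-thresholding. The dictionary and penalty are toy choices:

```python
import numpy as np

def soft_threshold(z, lam):
    # Proximal operator of the L1 penalty: shrinks entries toward zero.
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_code(x, D, lam=0.1, lr=0.1, steps=500):
    """ISTA: minimise 0.5*||x - D h||^2 + lam*||h||_1 over the code h."""
    h = np.zeros(D.shape[1])
    for _ in range(steps):
        grad = D.T @ (D @ h - x)                 # reconstruction-error gradient
        h = soft_threshold(h - lr * grad, lr * lam)
    return h

rng = np.random.default_rng(0)
D = rng.normal(size=(16, 8))
D /= np.linalg.norm(D, axis=0)                   # unit-norm dictionary atoms
x = D @ np.array([3.0, 0, 0, 0, -2.0, 0, 0, 0])  # signal with a truly sparse code
h = sparse_code(x, D, lam=0.1)
print(np.linalg.norm(D @ h - x) < 0.5 * np.linalg.norm(x))  # True: the code reconstructs x well
```

Raising `lam` drives more code entries to exact zero at the cost of reconstruction error, which is precisely the trade-off the abstract proposes as a model-selection criterion.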